Transparent Model Distillation

Authors

  • Sarah Tan
  • Rich Caruana
  • Giles Hooker
  • Albert Gordo
Abstract

Model distillation was originally designed to distill knowledge from a large, complex teacher model into a faster, simpler student model without significant loss in prediction accuracy. We investigate model distillation for a different goal, transparency: can fully-connected neural networks be distilled into models that are transparent or interpretable in some sense? Our teacher models are multilayer perceptrons, and we try two types of student model: (1) tree-based generalized additive models (GA2Ms), ensembles of boosted, short trees, and (2) gradient boosted trees (GBTs). More transparent student models are forthcoming. Our results are not yet conclusive. GA2Ms show some promise for distilling binary classification teachers, but not yet regression teachers. GBTs are not “directly” interpretable but may be promising for regression teachers. GA2M models may provide a computationally viable alternative to additive decomposition methods for global function approximation.
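The core recipe the abstract describes can be sketched in a few lines: the student is trained on the teacher's predictions (soft labels) rather than on the true labels. The toy code below is a minimal illustration, not the paper's implementation: a fixed function stands in for the trained MLP teacher, and a hand-rolled ensemble of boosted depth-1 regression stumps stands in for the GBT student.

```python
import numpy as np

def fit_stump(x, r):
    """Best 1-D threshold split for squared error (a depth-1 regression tree)."""
    order = np.argsort(x)
    xs, rs = x[order], r[order]
    csum = np.cumsum(rs)
    total, n = csum[-1], len(rs)
    best_score, best = np.inf, None
    for i in range(1, n):
        if xs[i] == xs[i - 1]:
            continue
        lm = csum[i - 1] / i                  # mean residual left of split
        rm = (total - csum[i - 1]) / (n - i)  # mean residual right of split
        # minimizing SSE is equivalent to maximizing i*lm^2 + (n-i)*rm^2
        score = -(i * lm * lm + (n - i) * rm * rm)
        if score < best_score:
            best_score = score
            best = ((xs[i] + xs[i - 1]) / 2, lm, rm)
    t, lm, rm = best
    return lambda z: np.where(z < t, lm, rm)

def distill(teacher, X, n_rounds=200, lr=0.1):
    """Fit a boosted-stump student to the teacher's predictions (soft labels)."""
    y_soft = teacher(X)              # the student never sees the true labels
    base = y_soft.mean()
    pred = np.full_like(y_soft, base)
    stumps = []
    for _ in range(n_rounds):
        f = fit_stump(X, y_soft - pred)  # fit each stump to the current residual
        pred += lr * f(X)
        stumps.append(f)
    def student(z):
        out = np.full(len(z), base, dtype=float)
        for f in stumps:
            out += lr * f(z)
        return out
    return student

rng = np.random.default_rng(0)
teacher = lambda x: np.sin(3 * x)    # stand-in for a trained MLP's prediction function
X = rng.uniform(-1, 1, 400)
student = distill(teacher, X)
err = np.abs(student(X) - teacher(X)).mean()
```

A distilled GA2M student differs only in how the weak learners are organized: each boosted tree is restricted to one feature (or one pair of features), so the final model decomposes into additive shape functions that can be plotted and inspected.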


Related articles

Auditing Black-Box Models Using Transparent Model Distillation With Side Information

Black-box risk scoring models permeate our lives, yet are typically proprietary or opaque. We propose a transparent model distillation approach to audit such models. Model distillation was first introduced to transfer knowledge from a large, complex teacher model to a faster, simpler student model without significant loss in prediction accuracy. To this we add a third criterion: transparency. To...


Successful Machine Learning Strategies in an Environment of Intermittent Data Availability

This report introduces an online, fully adaptive HybridANN approach for estimating product properties across multiple crude distillation processes. An extension of eTS+ is considered, utilizing recursive density estimation (RDE), eClustering+ and fuzzily weighted recursive least-squares (fwRLS) for one-step prediction of fuel quality in a data-sparse, sensor-heavy environment.[49] eTS+ ha...


A Comparison between Kubelka-Munk and Geometric Models for Prediction of Reflectance Factor of Transparent Fibers

The reflectance factors of transparent fibers, free of delustering agent, are predicted by geometric as well as Kubelka-Munk models. Transparent fibers are simulated by a net of glass capillary tubes containing different solutions of dyestuffs. Based on the results, prediction of the reflectance factor of the capillary net by the geometric model is relatively better than that obtained from the Kubelka-Munk...


Model Predictive Inferential Control of a Distillation Column

Typical production objectives in a distillation process require the delivery of products whose compositions meet certain specifications. The distillation control system must therefore hold product compositions as near their set points as possible in the face of upsets. In this project, inferential model predictive control, which utilizes an artificial neural network estimator and model predictive cont...




Journal:
  • CoRR

Volume: abs/1801.08640  Issue:

Pages: -

Publication year: 2018